Self-supervised pretraining for object detection in autonomous driving
The detection of road agents, such as vehicles
and pedestrians, is central to autonomous driving. Self-Supervised Learning (SSL) has been
proven to be an effective technique for learning discriminative feature representations for image classification, alleviating the need for labels,
a remarkable advancement considering how time-consuming and expensive labeling can be in autonomous driving. In this paper, we investigate the
effectiveness of contrastive SSL techniques such
as BYOL and MOCO on the object (agent) detection task using the ROad event Awareness Dataset
(ROAD) and BDD100K benchmarks. Our experiments show that using self-supervised pretraining,
we can achieve improvements of 3.96 and 0.78 percentage points
in the AP50 metric on the ROAD
and BDD100K benchmarks, respectively, for the object detection
task compared to supervised pretraining. Extensive comparisons and evaluations of current state-of-the-art SSL methods (namely MOCO, BYOL,
SCRL) are conducted and reported for the object
detection task.
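The contrastive objective underlying MOCO-style pretraining can be illustrated with a minimal InfoNCE sketch. This is hypothetical NumPy code, not the paper's implementation; the function name, the temperature value, and the embedding dimensions are assumptions for illustration only (note that BYOL, by contrast, learns without explicit negatives):

```python
import numpy as np

def info_nce_loss(q, k_pos, k_negs, tau=0.07):
    """InfoNCE loss as used in MoCo-style contrastive pretraining (illustrative sketch).

    q:      query embedding, shape (D,)
    k_pos:  positive key (embedding of an augmented view of the same image), shape (D,)
    k_negs: negative keys (embeddings of other images), shape (N, D)
    All embeddings are assumed L2-normalised; tau is the temperature.
    """
    # Similarity logits: positive first, then the N negatives.
    logits = np.concatenate([[q @ k_pos], k_negs @ q]) / tau
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive at index 0

def l2(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
q = l2(rng.normal(size=128))
k_pos = l2(q + 0.1 * rng.normal(size=128))   # augmented view -> similar key
k_negs = l2(rng.normal(size=(16, 128)))      # other images -> dissimilar keys
loss = info_nce_loss(q, k_pos, k_negs)
```

Minimising this loss pulls the two views of the same image together in embedding space while pushing other images away, which is the signal that replaces detection labels during pretraining.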